Conversation
roborev: Combined Review. Verdict: Not ready to merge; there is one high-severity auth exposure risk plus multiple medium-severity correctness/availability issues.
Synthesized from 4 reviews (agents: codex, gemini | types: security, default)

roborev: Combined Review. Verdict: Changes are not ready to merge due to 1 High and 4 Medium-severity issues.
Synthesized from 4 reviews (agents: codex, gemini | types: security, default)

roborev: Combined Review. Verdict: PR has one High and multiple Medium issues that should be addressed before merge.
Synthesized from 4 reviews (agents: codex, gemini | types: security, default)

roborev: Combined Review. Verdict: Not ready to merge; reviewers identified 1 High and 5 Medium-severity issues.
Synthesized from 4 reviews (agents: codex, gemini | types: security, default)
If there are security concerns it keeps raising that don't make sense, feel free to add stuff to review_guidelines in .roborev.toml
This PR implements TUI remote support (Issue #130) - enabling
Changes:
Status: All tests pass locally. Just pushed a lint fix - waiting on CI. Ready for review when CI is green.
roborev: Combined Review. Verdict: 4 Medium-severity issues should be addressed before merge (1 availability risk, 3 regressions).
Synthesized from 4 reviews (agents: gemini, codex | types: security, default)

roborev: Combined Review. Verdict: Not ready to merge; there are multiple Medium-severity issues that should be fixed first.
Synthesized from 4 reviews (agents: gemini, codex | types: security, default)
Force-pushed from 3cd30ee to f5be139
Force-pushed from 06668d6 to 2929cd3
Force-pushed from 7429f4d to 23fa7b5
@EconoBen sorry I've been delayed! This branch is looking pretty good now. I will update my remote msgvault deployment, try this out, and plan to get a 0.10.0 release out with this plus the IMAP support in the next 24 hours.
Add remote query engine and HTTP API endpoints so the TUI can run against a remote msgvault server. Includes setup wizard enhancements, token export for headless OAuth, Docker CI/CD, and documentation.
Co-Authored-By: Wes McKinney <wesmckinn+git@gmail.com>
- Check HasCompleteParquetData before creating DuckDB engine in serve, and pass s.DB() instead of nil so FTS5 deep search works remotely
- Add source IDs to account API responses so the TUI account filter sends the correct source_id on subsequent queries
- Add conversation_id to message summary and detail DTOs so thread navigation works in remote mode
- Serialize EmptyValueTargets as a comma-separated empty_targets query parameter so drill-downs into empty buckets work remotely
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fall back to query.NewSQLiteEngine when the Parquet cache is incomplete so remote TUI endpoints work on fresh installs
- Add conversation_id to list/search SELECT columns and update scanMessageRows so summary responses populate ConversationID consistently with the detail endpoint
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
handleDeepSearch now parses filter params (source_id, hide_deleted, sender, etc.) and merges them into the search query via MergeFilterIntoQuery, matching the local TUI behavior. The remote engine forwards AccountID and HideDeleted as query parameters since they cannot be represented in search query string syntax.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Parse offset/limit directly in handleDeepSearch instead of using parseMessageFilter's 500-row default, which is designed for list endpoints. Deep search keeps its original 100-row default.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Use -1 sentinel in parseMessageFilter for unset limit so endpoints can apply their own defaults (100 for search, 500 for list)
- Allow limit=0 in fast search for count-only requests from SearchFastCount, avoiding unnecessary message serialization
- Replace misleading total/total_count with count+has_more in handleFilteredMessages and handleDeepSearch responses
- Extend MergeFilterIntoQuery to carry After/Before date filters into deep search queries
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fetch limit+1 rows in handleFilteredMessages and handleDeepSearch to determine has_more accurately instead of guessing from page fill
- Reject limit=0 in handleFilteredMessages (normalize to 500 default) so the response limit field matches actual behavior
- Intersect date bounds in MergeFilterIntoQuery using max(after) and min(before) so user-supplied after:/before: cannot widen results beyond the current drill-down time context
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
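The limit+1 pattern from the first bullet can be sketched as follows. This is a hypothetical helper over an in-memory slice (the real handlers run SQL queries); only the fetch-one-extra-row idea comes from the commit message.

```go
package main

import "fmt"

// fetchPage requests one extra row beyond the page limit so has_more
// reflects actual data rather than guessing from whether the page is full.
func fetchPage(rows []int, offset, limit int) (page []int, hasMore bool) {
	if offset >= len(rows) {
		return nil, false
	}
	end := offset + limit + 1 // fetch limit+1 to detect a following page
	if end > len(rows) {
		end = len(rows)
	}
	page = rows[offset:end]
	if len(page) > limit {
		// The extra row exists, so there is another page; trim it off.
		page = page[:limit]
		hasMore = true
	}
	return page, hasMore
}

func main() {
	rows := []int{1, 2, 3, 4, 5}
	page, more := fetchPage(rows, 0, 2)
	fmt.Println(page, more) // [1 2] true
	page, more = fetchPage(rows, 4, 2)
	fmt.Println(page, more) // [5] false
}
```

The point of the extra row is that a page can be exactly full on the last page; page fill alone cannot distinguish that from "more data follows".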
…te encoding
- Check cacheNeedsBuild freshness before using DuckDB engine in serve
- Add maxPageSize (500) constant and clamp parsed limits
- Parse and serialize After/Before dates in aggregate and filter queries
- Add After/Before to remote engine transport (buildFilterQuery, buildAggregateQuery, parseAggregateOptions, parseMessageFilter)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… UTC
- cacheNeedsBuild now checks for messages soft-deleted after the last cache build, since incremental builds don't rewrite existing rows
- Normalize RFC3339 timestamps with offsets to UTC in parseMessageFilter and parseAggregateOptions so SQL comparisons use a consistent timezone
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Store LastSyncAt as UTC truncated to whole seconds, matching the
datetime('now') format used by deleted_from_source_at. Format the
comparison value as "YYYY-MM-DD HH:MM:SS" to ensure consistent
string comparison in the deletion staleness check.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Capture cacheWatermark before export starts so deletions during or after the build are detected as stale on the next freshness check
- Move maxPageSize clamping from parseMessageFilter to handleFilteredMessages, exempting conversation_id thread fetches so the TUI can load full threads and detect truncation accurately
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Change deletion staleness check from > to >= so a deletion in the same second as the watermark is not missed (may cause one spurious rebuild, acceptable for correctness)
- Restore maxPageSize cap in handleFastSearch, which was accidentally removed when clamping was moved out of parseMessageFilter
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- /messages/filter returns count/has_more, not total
- /search/deep returns count/has_more, not total_count
- /messages/filter default limit is 500, not 100
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
count < limit means has_more must be false.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
MergeFilterIntoQuery cannot represent sender_name, recipient_name, time_period, conversation_id, or empty_targets in search.Query. Return 400 instead of silently dropping these filters, which would let deep search escape the current drill-down scope.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
SenderName, RecipientName, ConversationID, and EmptyValueTargets are not honored by either the DuckDB or SQLite search paths. Return 400 instead of silently dropping them, matching the approach used in handleDeepSearch. Note: TimeRange.Period is supported by the DuckDB fast-search path (buildSearchConditions) so it is not rejected.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…earch
- runScheduledSync now uses cacheNeedsBuild to decide whether to rebuild, covering deletions and label updates (not just additions)
- MergeFilterIntoQuery converts TimeRange.Period to AfterDate/BeforeDate bounds so SQLite fast search honors time-bucket drill-down scope
- Add timePeriodToBounds helper for year/month/day period strings
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
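The period-to-bounds conversion could be sketched like this. Only the timePeriodToBounds name and the year/month/day inputs come from the commit message; the signature and half-open-interval choice are assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// timePeriodToBounds converts a drill-down period string ("2024",
// "2024-03", or "2024-03-15") into half-open [after, before) bounds.
func timePeriodToBounds(period string) (after, before time.Time, err error) {
	layouts := []struct {
		layout string
		step   func(time.Time) time.Time
	}{
		{"2006", func(t time.Time) time.Time { return t.AddDate(1, 0, 0) }},
		{"2006-01", func(t time.Time) time.Time { return t.AddDate(0, 1, 0) }},
		{"2006-01-02", func(t time.Time) time.Time { return t.AddDate(0, 0, 1) }},
	}
	for _, l := range layouts {
		// time.Parse fails on extra or missing text, so each layout
		// matches exactly one period granularity.
		if t, perr := time.Parse(l.layout, period); perr == nil {
			return t, l.step(t), nil
		}
	}
	return time.Time{}, time.Time{}, fmt.Errorf("unrecognized period %q", period)
}

func main() {
	a, b, _ := timePeriodToBounds("2024-03")
	fmt.Println(a.Format("2006-01-02"), b.Format("2006-01-02")) // 2024-03-01 2024-04-01
}
```

Using AddDate for the upper bound sidesteps month-length and leap-year arithmetic entirely.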
Incremental builds only export rows with id > lastMessageID and cannot update or remove existing parquet shards. When cacheNeedsBuild reports deletions as the staleness reason, pass fullRebuild=true so all parquet data is rewritten with current deletion state.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace early-return + reason-string-parsing with a cacheStaleness struct that collects all signals (new messages, deletions, missing tables) before returning. This fixes a mixed add+delete sync where the new-messages check short-circuited the deletion check, causing an incremental rebuild that left deleted rows in parquet shards. The FullRebuild flag is set whenever deletions are detected or the cache is missing/empty, so callers no longer need to parse the human-readable Reason string.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
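A sketch of the signal-collecting shape this commit describes. The cacheStaleness name, the FullRebuild flag, and the Reason field come from the commit message; the other field and method names are assumptions.

```go
package main

import "fmt"

// cacheStaleness collects every staleness signal before any decision is
// made, so a mixed add+delete sync cannot short-circuit the deletion check.
type cacheStaleness struct {
	NewMessages  bool
	Deletions    bool
	CacheMissing bool
	Reason       string // human-readable only; callers use the flags
}

// NeedsBuild reports whether any signal requires a rebuild.
func (s cacheStaleness) NeedsBuild() bool {
	return s.NewMessages || s.Deletions || s.CacheMissing
}

// FullRebuild reports whether an incremental export would leave stale
// rows: incremental builds only append rows with id > lastMessageID,
// so deletions (or a missing cache) force a full rewrite.
func (s cacheStaleness) FullRebuild() bool {
	return s.Deletions || s.CacheMissing
}

func main() {
	s := cacheStaleness{NewMessages: true, Deletions: true, Reason: "adds + deletes"}
	fmt.Println(s.NeedsBuild(), s.FullRebuild()) // true true
}
```

The bug the commit fixes falls out naturally here: with both NewMessages and Deletions set, FullRebuild is still true, whereas an early return on NewMessages alone would have chosen an incremental build.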
Force-pushed from ddcf049 to 2cd0980
I have checked this out with my own remote deployment and am just waiting for a clean bill of health to merge.
went ahead and merged this. I'm going to work on the 0.10.0 release notes |
The Problem
msgvault already supports remote CLI commands (stats, search, show-message) against a NAS server. But the TUI, the primary way to explore your archive, only works locally. If your archive lives on a NAS, you can't browse it from your laptop.

What This PR Does
Enables msgvault tui to work transparently against a remote server. When [remote].url is configured, the TUI connects to your NAS and gives you the full browsing experience: aggregate views, drill-down navigation, search, filtering. Everything works identically to local mode.

Implementation
Server side: 6 new API endpoints that expose the query.Engine capabilities needed by the TUI:

- /api/v1/aggregates – sender/domain/label/time aggregations
- /api/v1/aggregates/sub – drill-down into sub-aggregations
- /api/v1/messages/filter – filtered message lists with pagination
- /api/v1/stats/total – stats with filter context
- /api/v1/search/fast – metadata search (subject, sender, recipient)
- /api/v1/search/deep – full-text body search via FTS5
Client side: remote.Engine implements the full query.Engine interface over HTTP, so the TUI code doesn't change at all; it just gets a different engine.

Safety: Deletion staging and attachment export are disabled in remote mode. These require local file access and are potentially destructive, so they're local-only operations.
Testing
Closes #130